2 research outputs found

    Augmenting the performance of image similarity search through crowdsourcing

    Crowdsourcing is defined as “outsourcing a task that is traditionally performed by an employee to a large group of people in the form of an open call” (Howe 2006). Many platforms have been designed to support several types of crowdsourcing, and studies have shown that results produced by crowds on these platforms are generally accurate and reliable. Crowdsourcing can provide a fast and efficient way to use the power of human computation to solve problems that are difficult for machines to perform. Of the several microtasking crowdsourcing platforms available, we chose to perform our study using Amazon Mechanical Turk. In the context of our research, we studied the effect of user interface design and its corresponding cognitive load on the quality of crowd-produced results. Our results highlighted the importance of a well-designed user interface for crowdsourcing performance. Using crowdsourcing platforms such as Amazon Mechanical Turk, we can draw on humans to solve problems that are difficult for computers, such as image similarity search. However, for tasks like image similarity search, it is more efficient to design a hybrid human–machine system. In the context of our research, we studied the effect of involving the crowd on the performance of an image similarity search system and proposed a hybrid human–machine image similarity search system. Our proposed system uses machine power to perform the heavy computations and to search for similar images within the image dataset, and uses crowdsourcing to refine the results. We designed our content-based image retrieval (CBIR) system using the SIFT, SURF, SURF128 and ORB feature detectors/descriptors and compared the performance of the system with each detector/descriptor. Our experiment confirmed that crowdsourcing can dramatically improve the performance of the CBIR system.
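
    The sketch below illustrates the machine side of such a hybrid pipeline: extract local features from a query image, rank dataset images by descriptor-match count, and hand the shortlist to the crowd for refinement. It is a minimal sketch, not the authors' implementation; it assumes OpenCV (`cv2`), and the image paths and ratio threshold are illustrative.

```python
import cv2

def descriptors_for(image_path, detector):
    """Load an image in grayscale and compute keypoint descriptors."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = detector.detectAndCompute(image, None)
    return desc

def rank_candidates(query_path, candidate_paths, detector, matcher, ratio=0.75):
    """Rank candidate images by the number of Lowe-ratio-filtered matches."""
    query_desc = descriptors_for(query_path, detector)
    scores = []
    for path in candidate_paths:
        desc = descriptors_for(path, detector)
        good = 0
        if desc is not None and query_desc is not None:
            for pair in matcher.knnMatch(query_desc, desc, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    good += 1
        scores.append((path, good))
    return sorted(scores, key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    detector = cv2.ORB_create()                # cv2.SIFT_create() etc. are drop-in alternatives
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # Hamming distance suits ORB's binary descriptors
    ranked = rank_candidates("query.jpg", ["img_001.jpg", "img_002.jpg"],
                             detector, matcher)
    print(ranked[:10])  # shortlist that a crowd task could verify or re-rank
```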

    Crowdsourcing, Cognitive Load, and User Interface Design

    Harnessing human computation through crowdsourcing offers a new approach to solving complex problems, especially those that are easy for humans but difficult for computers. Micro-tasking platforms such as Amazon Mechanical Turk have attracted a large on-demand workforce of millions of workers as well as hundreds of thousands of requesters. Achieving high-quality results and minimizing total task execution time are the two main goals of these crowdsourcing systems. In this paper we study the effects of cognitive load and the complexity of user interface design on work quality and system latency. Our results indicate that complex and poorly designed user interfaces contribute to lower worker performance and increased latency.